Introduction to Open Data Science - Course Project

About the project

Hi! This course is a brilliant opportunity. I’m especially looking forward to learning how to use R and GitHub efficiently in my research process. I found the course through the Weboodi course catalog, which lists mandatory interdisciplinary Ph.D. courses.

Link to the GitHub repository page

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Tue Nov 17 20:14:52 2020"

The text continues here.


Chapter 2

Today is

date()
## [1] "Tue Nov 17 20:14:52 2020"

1. Getting the Data and Descriptive Statistics

Let’s start by reading the data in R.

learning2014 <- read.csv("/Users/marttikaila/IODS-project/data/learning2014.csv",row.names = 1)

Let’s then look at some descriptive statistics.

summary(learning2014)
##     Attitude         gender            Age             Stra      
##  Min.   :1.400   Min.   :0.0000   Min.   :17.00   Min.   :1.250  
##  1st Qu.:2.600   1st Qu.:0.0000   1st Qu.:21.00   1st Qu.:2.625  
##  Median :3.200   Median :1.0000   Median :22.00   Median :3.188  
##  Mean   :3.143   Mean   :0.6627   Mean   :25.51   Mean   :3.121  
##  3rd Qu.:3.700   3rd Qu.:1.0000   3rd Qu.:27.00   3rd Qu.:3.625  
##  Max.   :5.000   Max.   :1.0000   Max.   :55.00   Max.   :5.000  
##       Deep           Points           Surf      
##  Min.   :1.583   Min.   : 7.00   Min.   :1.583  
##  1st Qu.:3.333   1st Qu.:19.00   1st Qu.:2.417  
##  Median :3.667   Median :23.00   Median :2.833  
##  Mean   :3.680   Mean   :22.72   Mean   :2.787  
##  3rd Qu.:4.083   3rd Qu.:27.75   3rd Qu.:3.167  
##  Max.   :4.917   Max.   :33.00   Max.   :4.333

Based on the descriptive statistics, the sample consists of relatively young people, and most individuals are female. Since I coded gender as a binary variable that takes the value one if the person is female, the mean of the gender variable tells us that around 66 % of the sample are women. The average age is about 25.5 years, whereas the median person in the sample is 3.5 years younger, about 22 years old.
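These figures can be read off directly (a small sketch, using the learning2014 data loaded above):

```r
# Share of women (gender coded 1 = female) and the age figures discussed above
mean(learning2014$gender)    # ~0.66, i.e. about 66 % women
mean(learning2014$Age)       # ~25.5 years on average
median(learning2014$Age)     # 22 years for the median person
```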

2. A Graphical Overview

Let’s start the graphical overview by plotting the histograms of our variables.

library(ggplot2)
library(gridExtra)

age <-  ggplot(learning2014, aes(x = Age)) +
  geom_histogram(aes(y = ..density..), 
                  color = "grey30", fill = "darkorange") +
  geom_density(alpha = .3, fill = "darkorange")

points <-  ggplot(learning2014, aes(x = Points)) +
  geom_histogram(aes(y = ..density..), 
                 color = "grey30", fill = "darkorange") +
  geom_density(alpha = .3, fill = "darkorange")

surf <-  ggplot(learning2014, aes(x = Surf)) +
  geom_histogram(aes(y = ..density..), 
                 color = "grey30", fill = "darkorange") +
  geom_density(alpha = .3, fill = "darkorange")

attitude <-  ggplot(learning2014, aes(x = Attitude)) +
  geom_histogram(aes(y = ..density..), 
                 color = "grey30", fill = "darkorange") +
  geom_density(alpha = .3, fill = "darkorange")

stra <-  ggplot(learning2014, aes(x = Stra)) +
  geom_histogram(aes(y = ..density..), 
                 color = "grey30", fill = "darkorange") +
  geom_density(alpha = .3, fill = "darkorange")

deep <-  ggplot(learning2014, aes(x = Deep)) +
  geom_histogram(aes(y = ..density..), 
                 color = "grey30", fill = "darkorange") +
  geom_density(alpha = .3, fill = "darkorange")


grid.arrange(age, points, surf, attitude, stra, deep  , ncol = 2)
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

Based on the histograms, most of the variables have a roughly bell-shaped distribution. Eyeballing the plots, only the distributions of age and exam points appear more skewed: the age distribution has a long right tail, while the distribution of points leans towards higher values.

Let’s then create some scatterplots that show the raw relationship between the points variable and the other variables in the data.

library(ggplot2)
library(gridExtra)

ggp1 <- ggplot(learning2014, aes(x=Age, y=Points)) + 
  geom_point()+
  geom_smooth(method=lm, color = "grey30", fill = "darkorange")

ggp2 <- ggplot(learning2014, aes(x=Surf, y=Points)) + 
  geom_point()+
  geom_smooth(method=lm, color = "grey30", fill = "darkorange")

ggp3 <- ggplot(learning2014, aes(x=Attitude, y=Points)) + 
  geom_point()+
  geom_smooth(method=lm, color = "grey30", fill = "darkorange")

ggp4 <- ggplot(learning2014, aes(x=Stra, y=Points)) + 
  geom_point()+
  geom_smooth(method=lm, color = "grey30", fill = "darkorange")

ggp5 <- ggplot(learning2014, aes(x=Deep, y=Points)) + 
  geom_point()+
  geom_smooth(method=lm, color = "grey30", fill = "darkorange")

grid.arrange(ggp1, ggp2, ggp3,  ggp4,  ggp5, ncol = 2)
## `geom_smooth()` using formula 'y ~ x'
## `geom_smooth()` using formula 'y ~ x'
## `geom_smooth()` using formula 'y ~ x'
## `geom_smooth()` using formula 'y ~ x'
## `geom_smooth()` using formula 'y ~ x'

The figure above shows the relationship between the points variable and five other variables in the data. It hints that exam points are positively associated with the attitude and stra variables: individuals with higher attitude and stra scores tend to do better in the exam. The scatterplots and the fitted lines suggest that the association between attitude and exam points is stronger than the association between stra and exam points.

In contrast, the correlations between exam points and the age, surf, and deep variables are either negative or close to zero. The figure provides fairly strong suggestive evidence that there is no relationship between exam points and the deep variable. However, the surf and age variables might be negatively associated with exam points.

3. Data analysis

I pick the following three variables: attitude, stra, and surf. I then fit a linear regression model by OLS.

linearMod <- lm(Points ~ Attitude + Stra +  Surf , data=learning2014)  
summary(linearMod)
## 
## Call:
## lm(formula = Points ~ Attitude + Stra + Surf, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## Attitude      3.3952     0.5741   5.913 1.93e-08 ***
## Stra          0.8531     0.5416   1.575  0.11716    
## Surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

The regression summary shows that the attitude and stra variables are positively associated with exam points, while the surf variable has a negative relationship with exam performance. However, the summary also reveals that only the attitude coefficient is statistically significant at the 5 percent level, although, based on the F-statistic, the variables are jointly significant. In any case, I keep only the attitude variable and re-run the analysis.

linearMod <- lm(Points ~ Attitude , data=learning2014)  
summary(linearMod)
## 
## Call:
## lm(formula = Points ~ Attitude, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.6372     1.8303   6.358 1.95e-09 ***
## Attitude      3.5255     0.5674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

4. Interpretation of results

We get fairly similar results with the model that includes just the attitude variable. The attitude coefficient and the fit of the model are slightly different, but the differences are tiny. There are a couple of ways to interpret the model. First, the regressor’s coefficient tells us how much the expected exam score increases on average if the attitude score increases by one point. Hence, our model says that if the attitude score increases by one point, the expected exam score should increase by about 3.5 points. Also, using the intercept and the regression coefficient together, we can calculate the expected exam score conditional on an individual’s attitude score.
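As an illustration (a small sketch, assuming the learning2014 data loaded above), the expected exam score at a given attitude level can be computed directly from the fitted coefficients:

```r
# Expected exam points at an attitude score of 4:
# intercept + slope * 4, i.e. roughly 11.64 + 3.53 * 4
linearMod <- lm(Points ~ Attitude, data = learning2014)
predict(linearMod, newdata = data.frame(Attitude = 4))
```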

Second, R², the fit of the model, tells us how large a share of the variance in exam points is explained by our model. In my case, R squared is around 0.19, meaning that around 19 percent of the variance in exam scores can be explained by the model, i.e., by attitude scores. Considering that we are doing social science, where models tend to have relatively low fit compared to, for example, the natural sciences, attitude seems to do well at predicting exam scores.

5. Diagnostic plots

The model’s coefficient presented above can be interpreted as the best linear unbiased estimate, and the statistical tests that I examined are valid if the following conditions hold.

  1. Linearity
  2. Strict exogeneity
  3. No perfect multicollinearity
  4. Constant variance
  5. The error term ε is normally distributed

We can evaluate some of these assumptions using standard diagnostic plots. Let us draw them.

plot(lm(Points ~Attitude , data=learning2014))
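By default, calling `plot()` on an `lm` object draws four diagnostic panels; the `which` argument selects specific ones. A small sketch, assuming the learning2014 data loaded above:

```r
# which = 1: residuals vs fitted, which = 2: normal Q-Q, which = 5: residuals vs leverage
linearMod <- lm(Points ~ Attitude, data = learning2014)
plot(linearMod, which = c(1, 2, 5))
```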

5.1. Residuals vs Fitted values

The first figure plots the relationship between the residuals of the model and the fitted values. We get a fitted value by using our model to predict the value of the outcome variable, whereas a residual is the difference between the fitted value and the observed value. The residuals vs fitted values figure can be used to evaluate whether the linearity or constant variance assumptions hold. If the assumptions hold, the dots in the figure should be evenly distributed around the horizontal line. However, the figure seems to indicate that, on average, the dots get closer to the line at larger fitted values.

5.2. Normal QQ-plot

In the previous part of the exercise, we used t-statistics and F-statistics to evaluate whether our estimates were statistically significant. In a small sample, this is a valid approach if the error term of the model is normally distributed. In the second figure above, we evaluate whether the normality assumption holds by comparing the theoretical normal distribution to the observed distribution of the residuals. If these distributions were identical, all dots would lie along the 45-degree line that goes through the graph. In our case, some dots in the lowest and highest deciles are off the line, indicating that the distribution of the error term is a bit skewed.

5.3. Residuals vs Leverage

Finally, we want to check whether there are influential outlier observations that might significantly affect the estimates. The OLS estimator finds the estimate that minimizes the sum of squared residuals of the model. Because the residuals are squared, outliers that are far from the other observations can have a large impact on the estimates and create bias. The last figure shows how influential each single observation is. Based on the figure, outliers do not seem to be a big problem.


Chapter 3

Today is

date()
## [1] "Tue Nov 17 20:14:59 2020"

1. Create a new R Markdown file

Done!

2. Read the joined student alcohol consumption data and provide a description

Let’s start by reading the data in R.

joinedData <- read.csv("/Users/marttikaila/IODS-project/data/joinedData.csv",row.names = 1)

Let’s then take a glimpse at the data.

library(dplyr)
## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:gridExtra':
## 
##     combine
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
glimpse(joinedData) 
## Rows: 382
## Columns: 35
## $ school     <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP"…
## $ sex        <chr> "F", "F", "F", "F", "F", "M", "M", "F", "M", "M", "F", "F"…
## $ age        <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15, 15, 15…
## $ address    <chr> "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U"…
## $ famsize    <chr> "GT3", "GT3", "LE3", "GT3", "GT3", "LE3", "LE3", "GT3", "L…
## $ Pstatus    <chr> "A", "T", "T", "T", "T", "T", "T", "A", "A", "T", "T", "T"…
## $ Medu       <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, 3, 3, 4…
## $ Fedu       <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, 3, 2, 3…
## $ Mjob       <chr> "at_home", "at_home", "at_home", "health", "other", "servi…
## $ Fjob       <chr> "teacher", "other", "other", "services", "other", "other",…
## $ reason     <chr> "course", "course", "other", "home", "home", "reputation",…
## $ nursery    <chr> "yes", "no", "yes", "yes", "yes", "yes", "yes", "yes", "ye…
## $ internet   <chr> "no", "yes", "yes", "yes", "no", "yes", "yes", "no", "yes"…
## $ guardian   <chr> "mother", "father", "mother", "mother", "father", "mother"…
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 3, 1, 1…
## $ studytime  <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, 2, 1, 1…
## $ failures   <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0…
## $ schoolsup  <chr> "yes", "no", "yes", "no", "no", "no", "no", "yes", "no", "…
## $ famsup     <chr> "no", "yes", "no", "yes", "yes", "yes", "no", "yes", "yes"…
## $ paid       <chr> "no", "no", "no", "no", "no", "no", "no", "no", "no", "no"…
## $ activities <chr> "no", "no", "no", "yes", "no", "yes", "no", "no", "no", "y…
## $ higher     <chr> "yes", "yes", "yes", "yes", "yes", "yes", "yes", "yes", "y…
## $ romantic   <chr> "no", "no", "no", "yes", "no", "no", "no", "no", "no", "no…
## $ famrel     <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, 5, 5, 3…
## $ freetime   <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, 3, 5, 1…
## $ goout      <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, 2, 5, 3…
## $ Dalc       <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1…
## $ Walc       <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, 1, 4, 3…
## $ health     <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, 4, 5, 5…
## $ absences   <int> 5, 3, 8, 1, 2, 8, 0, 4, 0, 0, 1, 2, 1, 1, 0, 5, 8, 3, 9, 5…
## $ G1         <int> 2, 7, 10, 14, 8, 14, 12, 8, 16, 13, 12, 10, 13, 11, 14, 16…
## $ G2         <int> 8, 8, 10, 14, 12, 14, 12, 9, 17, 14, 11, 12, 14, 11, 15, 1…
## $ G3         <int> 8, 8, 11, 14, 12, 14, 12, 10, 18, 14, 12, 12, 13, 12, 16, …
## $ alc_use    <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.5, 1.0…
## $ high_use   <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FAL…

So we have 382 separate observations and 35 variables measuring different things. The data contain roughly two categories of variables. First, there are variables measuring background characteristics that relate only to the individual, for example, the individual’s sex, age, school performance, and alcohol consumption. Second, there are variables related to the individual’s parental background, for example, family size, the parents’ cohabitation status, and parental education.

summary(joinedData)
##     school              sex                 age          address         
##  Length:382         Length:382         Min.   :15.00   Length:382        
##  Class :character   Class :character   1st Qu.:16.00   Class :character  
##  Mode  :character   Mode  :character   Median :17.00   Mode  :character  
##                                        Mean   :16.59                     
##                                        3rd Qu.:17.00                     
##                                        Max.   :22.00                     
##    famsize            Pstatus               Medu            Fedu      
##  Length:382         Length:382         Min.   :0.000   Min.   :0.000  
##  Class :character   Class :character   1st Qu.:2.000   1st Qu.:2.000  
##  Mode  :character   Mode  :character   Median :3.000   Median :3.000  
##                                        Mean   :2.806   Mean   :2.565  
##                                        3rd Qu.:4.000   3rd Qu.:4.000  
##                                        Max.   :4.000   Max.   :4.000  
##      Mjob               Fjob              reason            nursery         
##  Length:382         Length:382         Length:382         Length:382        
##  Class :character   Class :character   Class :character   Class :character  
##  Mode  :character   Mode  :character   Mode  :character   Mode  :character  
##                                                                             
##                                                                             
##                                                                             
##    internet           guardian           traveltime      studytime    
##  Length:382         Length:382         Min.   :1.000   Min.   :1.000  
##  Class :character   Class :character   1st Qu.:1.000   1st Qu.:1.000  
##  Mode  :character   Mode  :character   Median :1.000   Median :2.000  
##                                        Mean   :1.448   Mean   :2.037  
##                                        3rd Qu.:2.000   3rd Qu.:2.000  
##                                        Max.   :4.000   Max.   :4.000  
##     failures       schoolsup            famsup              paid          
##  Min.   :0.0000   Length:382         Length:382         Length:382        
##  1st Qu.:0.0000   Class :character   Class :character   Class :character  
##  Median :0.0000   Mode  :character   Mode  :character   Mode  :character  
##  Mean   :0.2016                                                           
##  3rd Qu.:0.0000                                                           
##  Max.   :3.0000                                                           
##   activities           higher            romantic             famrel     
##  Length:382         Length:382         Length:382         Min.   :1.000  
##  Class :character   Class :character   Class :character   1st Qu.:4.000  
##  Mode  :character   Mode  :character   Mode  :character   Median :4.000  
##                                                           Mean   :3.937  
##                                                           3rd Qu.:5.000  
##                                                           Max.   :5.000  
##     freetime        goout            Dalc            Walc           health     
##  Min.   :1.00   Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
##  1st Qu.:3.00   1st Qu.:2.000   1st Qu.:1.000   1st Qu.:1.000   1st Qu.:3.000  
##  Median :3.00   Median :3.000   Median :1.000   Median :2.000   Median :4.000  
##  Mean   :3.22   Mean   :3.113   Mean   :1.482   Mean   :2.296   Mean   :3.573  
##  3rd Qu.:4.00   3rd Qu.:4.000   3rd Qu.:2.000   3rd Qu.:3.000   3rd Qu.:5.000  
##  Max.   :5.00   Max.   :5.000   Max.   :5.000   Max.   :5.000   Max.   :5.000  
##     absences          G1              G2              G3           alc_use     
##  Min.   : 0.0   Min.   : 2.00   Min.   : 4.00   Min.   : 0.00   Min.   :1.000  
##  1st Qu.: 1.0   1st Qu.:10.00   1st Qu.:10.00   1st Qu.:10.00   1st Qu.:1.000  
##  Median : 3.0   Median :12.00   Median :12.00   Median :12.00   Median :1.500  
##  Mean   : 4.5   Mean   :11.49   Mean   :11.47   Mean   :11.46   Mean   :1.889  
##  3rd Qu.: 6.0   3rd Qu.:14.00   3rd Qu.:14.00   3rd Qu.:14.00   3rd Qu.:2.500  
##  Max.   :45.0   Max.   :18.00   Max.   :18.00   Max.   :18.00   Max.   :5.000  
##   high_use      
##  Mode :logical  
##  FALSE:268      
##  TRUE :114      
##                 
##                 
## 

3. Choose the variables

I pick the following four variables:

  1. Student’s sex
  • Hypothesis: There is a lot of evidence that males have a higher inclination to engage in risky behavior such as binge drinking. It will be interesting to see whether this holds in the data at hand.
  2. Age
  • Hypothesis: Since most countries have a legal drinking age, age affects the availability of alcohol and, hence, perhaps drinking.
  3. Father’s education
  • Hypothesis: Father’s education can work as a proxy for socioeconomic status (SES). Binge/heavy drinking in particular is reportedly associated with low SES. Thus, I investigate whether the father’s low SES is related to the child’s alcohol consumption.
  4. Absences
  • Hypothesis: Schools might have an incapacitation effect on alcohol consumption: at least while children are at school during the daytime, they cannot consume alcohol. On the other hand, students may be absent because they drink too much alcohol. Thus, the direction of causality is unclear.

Let’s create a data set that contains only these independent variables and the alcohol-related outcome.

library(tidyr); library(dplyr); library(ggplot2); library(GGally)

analysisAlc <- joinedData %>%
  select(sex, age, Fedu, absences, high_use)

# cut() already returns a factor, so no separate factor() call is needed
analysisAlc$Fedu <- cut(analysisAlc$Fedu, breaks = c(-Inf, 0, 1, 2, 3, Inf),
                        labels = c("None", "primaryLow", "primaryHigh", "secondary", "Tertiary"))

summary(analysisAlc)
##      sex                 age                 Fedu        absences   
##  Length:382         Min.   :15.00   None       :  2   Min.   : 0.0  
##  Class :character   1st Qu.:16.00   primaryLow : 77   1st Qu.: 1.0  
##  Mode  :character   Median :17.00   primaryHigh:105   Median : 3.0  
##                     Mean   :16.59   secondary  : 99   Mean   : 4.5  
##                     3rd Qu.:17.00   Tertiary   : 99   3rd Qu.: 6.0  
##                     Max.   :22.00                     Max.   :45.0  
##   high_use      
##  Mode :logical  
##  FALSE:268      
##  TRUE :114      
##                 
##                 
## 

4. Descriptive evidence

4.1 Histograms

library(tidyr); library(dplyr); library(ggplot2); library(GGally)

gather(analysisAlc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

Based on the histograms, around 1/3 of the sample are high users of alcohol. Otherwise, the sample individuals are relatively young, and their fathers are somewhat educated. A few individuals are 19–21 years old. If these individuals are in the sample because of school retention, any association we find between age and heavy drinking may be explained by other unobserved problems behind the retention. The distribution of absences is very skewed: on the one hand, many individuals are almost never absent, and on the other hand, quite a few individuals have high absence counts.

4.2 Boxplots

library(tidyr); library(dplyr); library(ggplot2); library(GGally)

box1 <- ggplot(analysisAlc, aes(x = high_use  , y = absences, col = Fedu ))

box1 + geom_boxplot() + ylab("Absences")

The boxplots show several descriptive statistics of the absences and age variables for the high and low alcohol consumption groups. The line in the middle of each box reports the median value of the variable on the y-axis, and the differently colored boxes within each consumption group represent the father’s-education sub-groups.

We can perhaps see from the first figure that the median number of absences is larger among high alcohol users. This is in line with my fourth hypothesis. However, the first box figure also shows that the father’s education does not seem to be related to absences.

library(tidyr); library(dplyr); library(ggplot2); library(GGally)

box1 <- ggplot(analysisAlc, aes(x = high_use  , y = age, col =  Fedu  ))

box1 + geom_boxplot() + ylab("age")

The second box figure provides some evidence that the median age is higher among high alcohol users. This is in line with my second hypothesis. The figure also suggests that there is no relationship between age and the father’s education.

4.3 Cross tabulation

library(tidyr); library(dplyr); library(ggplot2);  library(knitr)

analysisAlc %>%
  group_by(Fedu, high_use) %>%
  summarise(n=n()) %>%
  spread(high_use, n) %>%
  kable()
Fedu          FALSE  TRUE
None              2    NA
primaryLow       53    24
primaryHigh      75    30
secondary        72    27
Tertiary         66    33
analysisAlc %>%
  group_by(Fedu) %>%
  summarise(mean_high_use=mean(high_use)) %>%
  kable()
Fedu           mean_high_use
None               0.0000000
primaryLow         0.3116883
primaryHigh        0.2857143
secondary          0.2727273
Tertiary           0.3333333

The first table shows the number of high and low alcohol users by the father’s educational background. The second table shows the fraction of high alcohol users by the father’s educational background. Together the tables suggest that there is no clear relationship between high alcohol use and the father’s education. This goes against my third hypothesis.

library(tidyr); library(dplyr); library(ggplot2);  library(knitr)

analysisAlc %>%
  group_by(sex, high_use) %>%
  summarise(n=n()) %>%
  spread(high_use, n) %>%
  kable()
sex  FALSE  TRUE
F      156    42
M      112    72
analysisAlc %>%
  group_by(sex) %>%
  summarise(mean_high_use=mean(high_use)) %>%
  kable()
sex  mean_high_use
F        0.2121212
M        0.3913043

Based on tables 3 and 4, it seems that high alcohol use is more common among males, which supports my first assumption.

5. Logistic regression

Let us fit the model and summarize the results

model <- glm(high_use ~ sex + absences + age + Fedu,  data = analysisAlc, family = "binomial")

# summarize the model 
summary(model)
## 
## Call:
## glm(formula = high_use ~ sex + absences + age + Fedu, family = "binomial", 
##     data = analysisAlc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.2790  -0.8479  -0.6231   1.0561   2.1848  
## 
## Coefficients:
##                  Estimate Std. Error z value Pr(>|z|)    
## (Intercept)     -18.41011  623.86680  -0.030    0.976    
## sexM              1.00410    0.24205   4.148 3.35e-05 ***
## absences          0.09335    0.02326   4.014 5.98e-05 ***
## age               0.16813    0.10351   1.624    0.104    
## FeduprimaryLow   13.90961  623.86478   0.022    0.982    
## FeduprimaryHigh  13.70235  623.86475   0.022    0.982    
## Fedusecondary    13.59776  623.86476   0.022    0.983    
## FeduTertiary     13.96070  623.86475   0.022    0.982    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 423.95  on 374  degrees of freedom
## AIC: 439.95
## 
## Number of Fisher Scoring iterations: 13
coef(model)
##     (Intercept)            sexM        absences             age  FeduprimaryLow 
##    -18.41010883      1.00410227      0.09335241      0.16813210     13.90961364 
## FeduprimaryHigh   Fedusecondary    FeduTertiary 
##     13.70235172     13.59776078     13.96070193

We find that all the coefficients are positive, meaning that the probability of high alcohol consumption increases if the individual is male, as age increases, or as the number of absences increases. However, note that the effect of age is not statistically significant. Also, the impact of the father’s education is hard to interpret, but we can at least conclude that it is not statistically significant.

In order to say something more than the direction of the effect, let us derive odds ratios for the variables.

# Let's get the odds ratios and the confidence intervals 
OR <- coef(model) %>% exp
CI <- confint(model) %>% exp
## Waiting for profiling to be done...
cbind(OR, CI)
##                           OR        2.5 %       97.5 %
## (Intercept)     1.010628e-08           NA 1.713949e+33
## sexM            2.729456e+00 1.708562e+00 4.421144e+00
## absences        1.097849e+00 1.051069e+00 1.151547e+00
## age             1.183093e+00 9.669145e-01 1.452162e+00
## FeduprimaryLow  1.098673e+06 3.612856e-37           NA
## FeduprimaryHigh 8.930088e+05 2.829941e-37           NA
## Fedusecondary   8.043267e+05 2.570223e-37           NA
## FeduTertiary    1.156261e+06 3.630630e-37           NA

Based on the confidence intervals, the age variable is not statistically significant. Also, the father’s education variable is problematic, since in some of the education categories there is no variation in the outcome variable (the “None” category has only two observations, both low users), which explains the huge standard errors. Hence, I drop these two variables and fit the model again.

# Refit without age and Fedu, then get the odds ratios and the confidence intervals 


model2 <- glm(high_use ~ sex + absences,  data = analysisAlc, family = "binomial")

OR <- coef(model2) %>% exp
CI <- confint(model2) %>% exp
## Waiting for profiling to be done...
cbind(OR, CI)
##                   OR     2.5 %    97.5 %
## (Intercept) 0.159445 0.1012577 0.2427684
## sexM        2.658116 1.6710354 4.2863129
## absences    1.101409 1.0549317 1.1548057

Now we find that the odds ratio for the male indicator is statistically significant and takes a value of about 2.7. The interpretation is that if a person is male, the odds of high alcohol consumption are about 2.7 times greater, holding absences constant.

The interpretation of the odds ratio for absences is a bit different because it is a continuous measure. The odds ratio is about 1.1, meaning that each additional absence multiplies the odds of high alcohol use by a factor of 1.1. This implies that if absences increase by n, the odds of high alcohol use increase by a factor of 1.1^n.
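A quick sketch of this compounding, using the model2 object fitted above:

```r
# Odds ratio of one additional absence, and of five additional absences
OR_abs <- exp(coef(model2)["absences"])  # ~1.10
OR_abs^5                                 # ~1.6: five extra absences multiply the odds by about 1.6
```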

Hence, the statistical model seems to confirm two of my hypotheses and reject the other two. However, it is essential to keep in mind that all the relationships found above are associations, not causal effects.

6. Predictions

6.1 Tables and figures

library(dplyr); library(ggplot2); library(visreg); library(caret); library(sjlabelled)
## Loading required package: lattice
## 
## Attaching package: 'sjlabelled'
## The following object is masked from 'package:dplyr':
## 
##     as_label
model2 <- glm(high_use ~ sex + absences, data = analysisAlc, family = "binomial")

# Compute the predicted probability of high use for each individual
probs <- predict(model2, type = "response")

# Add the predicted probabilities back to the data 
analysisAlc <- mutate(analysisAlc, probability = probs)

# Use the predicted probabilities to classify individuals as high users
analysisAlc <- mutate(analysisAlc, prediction = probability > 0.5)

# Let's tabulate the high_use variable against our prediction. 
table(high_use = analysisAlc$high_use, prediction = analysisAlc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   258   10
##    TRUE     88   26
table(high_use = analysisAlc$high_use, prediction = analysisAlc$prediction) %>%
    prop.table() %>% addmargins
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.67539267 0.02617801 0.70157068
##    TRUE  0.23036649 0.06806283 0.29842932
##    Sum   0.90575916 0.09424084 1.00000000
# initialize a plot of 'high_use' versus 'probability' in 'analysisAlc'
g <- ggplot(analysisAlc, aes(x = probability, y = high_use , col = prediction))
g +  geom_point()

Based on the figure and the confusion matrix, our model produces quite many false-negative predictions, meaning that the model says a person is not a high user of alcohol when the person actually is one.
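To quantify this, the true-positive and true-negative rates can be read off the confusion matrix (a small sketch, assuming the analysisAlc data with the prediction column created above):

```r
tab <- table(high_use = analysisAlc$high_use, prediction = analysisAlc$prediction)
tab["TRUE", "TRUE"] / sum(tab["TRUE", ])    # sensitivity: only 26/114 high users detected
tab["FALSE", "FALSE"] / sum(tab["FALSE", ]) # specificity: 258/268 low users correctly classified
```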

Let me also draw these cool figures that show how the predicted probability of high use evolves as a function of absences and how this relationship varies by sex.

# Draw the marginal effect plots with visreg



visreg(model2 , "absences", 
       gg = TRUE, 
       scale="response") +
  labs(y = "Prob(High Use)", 
       x = "absences",
       title = "Relationship of absences and High Use",
       subtitle = "controlling for sex"
       )

visreg(model2 , "absences", 
       gg = TRUE, 
       by = "sex",
       scale="response") +
  labs(y = "Prob(High Use)", 
       x = "absences",
       title = "Relationship of absences and High Use",
       subtitle = "by sex"
       )

6.2 Prediction errors

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
loss_func(class = analysisAlc$high_use, prob = analysisAlc$probability)
## [1] 0.2565445

It seems that about 26 percent of our predictions are wrong. Let’s compare this to a naive strategy in which we assign every person the same fixed probability of 0.333 of being a high user.

loss_func(class = analysisAlc$high_use, prob = 0.333)
## [1] 0.2984293

With this constant-probability strategy, around 30 percent of our predictions are wrong, which is simply the share of high users in the data. Hence, the proper statistical model does only a little better than the naive guessing strategy.
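
The training-set error above may also be optimistic because the model is evaluated on the same data it was fitted on. A common next step would be K-fold cross-validation; below is a self-contained sketch on simulated stand-in data (the variable names mirror the real data, but the data-generating process, the coefficients, and the seed are all my own invention):

```r
library(boot)  # for cv.glm

set.seed(1)  # arbitrary seed so the folds are reproducible
# Simulated stand-in data (the real analysisAlc data is not reproduced here)
d <- data.frame(absences = rpois(300, 4),
                sex = sample(c("F", "M"), 300, replace = TRUE))
d$high_use <- runif(300) < plogis(-2 + 0.15 * d$absences + 0.8 * (d$sex == "M"))

fit <- glm(high_use ~ sex + absences, data = d, family = "binomial")

# Same loss function as above: proportion of predictions on the wrong side of 0.5
loss_func <- function(class, prob) mean(abs(class - prob) > 0.5)

# 10-fold cross-validated prediction error
cv.glm(data = d, glmfit = fit, cost = loss_func, K = 10)$delta[1]
```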


Chapter 4

Today is

date()
## [1] "Tue Nov 17 20:15:06 2020"

Analysis exercise

1. Create new Rmarkdown file

done!

2. Load the Boston Data

Let us load the data and then summarize and provide a glimpse into the data.

library(dplyr); library (MASS)
glimpse(Boston) 
## Rows: 506
## Columns: 14
## $ crim    <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, 0.08829…
## $ zn      <dbl> 18.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.5, 12.5, 12.5, 12.5, 12.5, …
## $ indus   <dbl> 2.31, 7.07, 7.07, 2.18, 2.18, 2.18, 7.87, 7.87, 7.87, 7.87, 7…
## $ chas    <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ nox     <dbl> 0.538, 0.469, 0.469, 0.458, 0.458, 0.458, 0.524, 0.524, 0.524…
## $ rm      <dbl> 6.575, 6.421, 7.185, 6.998, 7.147, 6.430, 6.012, 6.172, 5.631…
## $ age     <dbl> 65.2, 78.9, 61.1, 45.8, 54.2, 58.7, 66.6, 96.1, 100.0, 85.9, …
## $ dis     <dbl> 4.0900, 4.9671, 4.9671, 6.0622, 6.0622, 6.0622, 5.5605, 5.950…
## $ rad     <int> 1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4…
## $ tax     <dbl> 296, 242, 242, 222, 222, 222, 311, 311, 311, 311, 311, 311, 3…
## $ ptratio <dbl> 15.3, 17.8, 17.8, 18.7, 18.7, 18.7, 15.2, 15.2, 15.2, 15.2, 1…
## $ black   <dbl> 396.90, 396.90, 392.83, 394.63, 396.90, 394.12, 395.60, 396.9…
## $ lstat   <dbl> 4.98, 9.14, 4.03, 2.94, 5.33, 5.21, 12.43, 19.15, 29.93, 17.1…
## $ medv    <dbl> 24.0, 21.6, 34.7, 33.4, 36.2, 28.7, 22.9, 27.1, 16.5, 18.9, 1…
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

The data contain 506 observations, each of which apparently represents a census tract in the Boston area. The data have 14 variables that describe various characteristics of these census tracts, such as the crime rate, the tax rate, and the median value of homes in the tract.

3. Graphical overview

I start by drawing some scatter plots.

library(tidyr); library(dplyr); library(ggplot2); library(GGally)
gather(Boston) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

3.1 Histograms

Since the data contain so many variables, most of the histograms are relatively small. However, we can at least say that there seems to be quite a bit of variation in the age, nox and medv variables.

3.2 Scatter and correlation plots

library("ggplot2"); library("GGally");library(corrplot)
pairs(Boston)

cMatrix = cor(Boston) %>% round(digits=2)
corrplot(cMatrix, method = "ellipse", type= "upper")

Since the scatter-plot matrix is messy, I will concentrate on the latter figure, which shows only the correlations between variables. Based on the figure, some variable pairs, like the lower status of the population (lstat) and the median value of homes (medv), are strongly negatively correlated; others, like the proportion of non-retail business (indus) and the nitrogen oxide concentration (nox), are strongly positively correlated; and some pairs show hardly any relationship at all.
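
The specific pairs mentioned can also be read off the correlation matrix directly (a sketch; the rounding mirrors the chunk above):

```r
library(MASS)  # for the Boston data

cm <- round(cor(Boston), digits = 2)
cm["lstat", "medv"]   # strong negative correlation (about -0.74)
cm["indus", "nox"]    # strong positive correlation (about 0.76)
cm["chas", "crim"]    # essentially no linear relationship
```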

4. Standardize the data

4.1 Standardization

library("ggplot2"); library("GGally");library(corrplot)
bostonStand <-scale(Boston)
bostonStand <- as.data.frame(bostonStand)
summary(bostonStand)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865

We can see that after the standardization, each variable is centered around zero (its mean is exactly zero) and scaled to unit variance. Let us then create a new categorical variable using the existing crime variable.
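
Under the hood, scale() subtracts each column's mean and divides by its standard deviation; a minimal sketch on a toy vector (the vector is made up for illustration):

```r
# scale() is equivalent to (x - mean(x)) / sd(x), column by column:
x <- c(2, 4, 6, 8)
manual <- (x - mean(x)) / sd(x)
all.equal(as.vector(scale(x)), manual)  # TRUE
```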

library("ggplot2"); library("GGally");library(corrplot); library(dplyr)
bins <- quantile(bostonStand$crim)
crime <-cut(bostonStand$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high") )
bostonStand <- dplyr::select(bostonStand , -crim)
summary(bostonStand)
##        zn               indus              chas              nox         
##  Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723   Min.   :-1.4644  
##  1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121  
##  Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441  
##  Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981  
##  Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648   Max.   : 2.7296  
##        rm               age               dis               rad         
##  Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658   Min.   :-0.9819  
##  1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373  
##  Median :-0.1084   Median : 0.3171   Median :-0.2790   Median :-0.5225  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596  
##  Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566   Max.   : 1.6596  
##       tax             ptratio            black             lstat        
##  Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033   Min.   :-1.5296  
##  1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049   1st Qu.:-0.7986  
##  Median :-0.4642   Median : 0.2746   Median : 0.3808   Median :-0.1811  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332   3rd Qu.: 0.6024  
##  Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406   Max.   : 3.5453  
##       medv        
##  Min.   :-1.9063  
##  1st Qu.:-0.5989  
##  Median :-0.1449  
##  Mean   : 0.0000  
##  3rd Qu.: 0.2683  
##  Max.   : 2.9865
bostonStand$crime <- crime
summary(bostonStand)
##        zn               indus              chas              nox         
##  Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723   Min.   :-1.4644  
##  1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121  
##  Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441  
##  Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981  
##  Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648   Max.   : 2.7296  
##        rm               age               dis               rad         
##  Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658   Min.   :-0.9819  
##  1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373  
##  Median :-0.1084   Median : 0.3171   Median :-0.2790   Median :-0.5225  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596  
##  Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566   Max.   : 1.6596  
##       tax             ptratio            black             lstat        
##  Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033   Min.   :-1.5296  
##  1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049   1st Qu.:-0.7986  
##  Median :-0.4642   Median : 0.2746   Median : 0.3808   Median :-0.1811  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332   3rd Qu.: 0.6024  
##  Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406   Max.   : 3.5453  
##       medv              crime    
##  Min.   :-1.9063   low     :127  
##  1st Qu.:-0.5989   med_low :126  
##  Median :-0.1449   med_high:126  
##  Mean   : 0.0000   high    :127  
##  3rd Qu.: 0.2683                 
##  Max.   : 2.9865
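
The quartile-based binning used for the crime variable can be illustrated on a toy vector (made up for illustration), showing what quantile() and cut() do together:

```r
# Sketch: splitting a vector into quartile-based categories, as done for crim above.
x <- 1:12
bins <- quantile(x)                       # 0%, 25%, 50%, 75%, 100% cut points
cats <- cut(x, breaks = bins, include.lowest = TRUE,
            labels = c("low", "med_low", "med_high", "high"))
table(cats)  # each class holds roughly a quarter of the observations
```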

4.2 Train and test sets

Let us create training and test sets and then have a glimpse at both of them.

library("ggplot2");library(corrplot); library(dplyr)
nObs <- nrow(bostonStand)
indTrain <- sample(nObs , size = nObs * 0.8)
# Training set 
trainBoston <- bostonStand[indTrain,]
# Test set 
testBoston <- bostonStand[-indTrain,]

glimpse(trainBoston) 
## Rows: 404
## Columns: 14
## $ zn      <dbl> -0.48724019, -0.48724019, 3.58608778, -0.48724019, -0.4872401…
## $ indus   <dbl> 0.1156240, 1.2307270, -1.2327031, 1.0149946, 1.2307270, 1.014…
## $ chas    <dbl> -0.2723291, -0.2723291, -0.2723291, -0.2723291, -0.2723291, -…
## $ nox     <dbl> 0.1579678, 2.7296452, -1.1960462, 0.6584956, 2.7296452, 1.253…
## $ rm      <dbl> 0.98398626, -0.22008342, 2.23217669, -1.87105367, -1.93367668…
## $ age     <dbl> 0.796660955, 1.116389695, -1.256708066, 1.116389695, 0.963630…
## $ dis     <dbl> -0.7729187, -1.1283332, 0.6282713, -1.1694595, -1.1085299, -0…
## $ rad     <dbl> -0.9818712, -0.5224844, -0.6373311, 1.6596029, -0.5224844, 1.…
## $ tax     <dbl> -0.80241764, -0.03107419, -1.09315478, 1.52941294, -0.0310741…
## $ ptratio <dbl> 1.17530274, -1.73470120, -1.73470120, 0.80577843, -1.73470120…
## $ black   <dbl> 0.4406159, -2.0128627, 0.3954874, 0.2064297, 0.3837671, 0.237…
## $ lstat   <dbl> -0.98207574, 2.12110437, -1.23834016, -1.31535952, 2.36336527…
## $ medv    <dbl> 0.148654801, -0.949516961, 2.823409785, 2.986504601, -0.86253…
## $ crime   <fct> low, med_high, low, high, med_high, high, med_high, med_high,…
glimpse(testBoston) 
## Rows: 102
## Columns: 14
## $ zn      <dbl> -0.4872402, -0.4872402, -0.4872402, -0.4872402, -0.4872402, 2…
## $ indus   <dbl> -0.4368257, -0.4368257, -0.4368257, -0.4368257, -0.7545936, -…
## $ chas    <dbl> -0.2723291, -0.2723291, -0.2723291, -0.2723291, -0.2723291, -…
## $ nox     <dbl> -0.14407485, -0.14407485, -0.14407485, -0.14407485, -0.480636…
## $ rm      <dbl> -0.47769171, -0.51327297, -0.97582929, -0.26847393, -0.631402…
## $ age     <dbl> -0.24068118, 0.90678974, 0.60837625, 1.00626091, -0.25489135,…
## $ dis     <dbl> 0.43332522, 0.28710377, 0.31322322, -0.01673672, -0.19810072,…
## $ rad     <dbl> -0.6373311, -0.6373311, -0.6373311, -0.6373311, -0.5224844, -…
## $ tax     <dbl> -0.60068166, -0.60068166, -0.60068166, -0.60068166, -0.766817…
## $ ptratio <dbl> 1.17530274, 1.17530274, 1.17530274, 1.17530274, 0.34387304, -…
## $ black   <dbl> 0.440615895, 0.412465352, -0.583319029, -1.186967442, 0.22877…
## $ lstat   <dbl> -0.61518350, 0.51069953, 0.54010692, 1.07644175, -0.17407261,…
## $ medv    <dbl> -0.23189977, -0.75380318, -0.93864397, -0.98213592, -0.275391…
## $ crime   <fct> med_high, med_high, med_high, med_high, med_low, low, med_low…
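
Note that sample() draws a different split on every run. If a reproducible split is wanted, one would set a seed before sampling; a sketch (the seed value 42 is arbitrary and not used above):

```r
# A reproducible 80/20 split of the 506 Boston rows
set.seed(42)                            # arbitrary seed for reproducibility
n <- 506                                # number of rows in the Boston data
train_idx <- sample(n, size = floor(n * 0.8))
length(train_idx)                       # 404 rows in the training set
```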

5. Discriminant analysis

library("ggplot2");library(corrplot); library(dplyr)

lda.fit <- lda( crime ~ . , data = trainBoston )

lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(trainBoston$crime)

plot(lda.fit , dimen = 2,  col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

6. Predict and cross tabulate

Let us first save the initial values of the crime variable of test data into a separate vector.

library("ggplot2");library(corrplot); library(dplyr)

# save the initial values of crime vectors 
orginalCrime <- testBoston$crime 

testBoston <- dplyr::select(testBoston, -crime)

Then let us predict the values of the crime variable in the test data using the model we found using the training data. Finally, we compare the predicted and actual values of the crime variable.

library("ggplot2");library(corrplot); library(dplyr)

lda.pred <- predict(lda.fit, newdata =  testBoston)

table(correct = orginalCrime, lda.pred$class )
##           
## correct    low med_low med_high high
##   low       15       5        0    0
##   med_low    3      16        8    0
##   med_high   0       5       18    1
##   high       0       0        0   31

The diagonal from the top-left corner to the bottom-right corner shows the number of correct predictions. Based on the table, our model performs reasonably well. It is very good at predicting the extreme classes (low and high crime rates), while there are more wrong guesses for the two middle classes.
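
The overall hit rate can be computed from that diagonal (a sketch rebuilt from the tabulated counts above; the exact numbers change between knits because the train/test split is random):

```r
# Cross-tabulation from above, rebuilt as a matrix:
tab <- matrix(c(15,  5,  0,  0,
                 3, 16,  8,  0,
                 0,  5, 18,  1,
                 0,  0,  0, 31),
              nrow = 4, byrow = TRUE,
              dimnames = list(correct   = c("low", "med_low", "med_high", "high"),
                              predicted = c("low", "med_low", "med_high", "high")))
sum(diag(tab)) / sum(tab)  # share of correct predictions, about 0.78
```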

7. Distances and K-means algorithm

7.1 Distances

First, we reload the data and standardize the variables. Then I calculate the Euclidean distances between the observations.

library(dplyr); library (MASS)
boston<-scale(Boston)
boston <- as.data.frame(boston)

distE <- dist(boston)
summary(distE )
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
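
By default, dist() computes the Euclidean distance between every pair of rows; a minimal check on two toy points:

```r
# Euclidean distance between two points, by hand and via dist():
a <- c(0, 0); b <- c(3, 4)
sqrt(sum((a - b)^2))           # 5
as.numeric(dist(rbind(a, b)))  # 5
```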

7.2 K-means algorithm

I first perform k-means clustering with 7 clusters. Note that I picked the number of clusters arbitrarily.

library(dplyr); library(MASS)
# Cluster the standardized data (boston), not the raw Boston data
kM <- kmeans(boston, centers = 7)
pairs(boston[1:5], col = kM$cluster )

pairs(boston[6:10], col = kM$cluster )

pairs(boston[11:14], col = kM$cluster )

Let us then be a bit more formal and find the optimal number of clusters by calculating the total within-cluster sum of squares (WCSS).

library(dplyr); library (MASS); library("ggplot2");
set.seed(123)

k_max <- 10

twcss <- sapply(1:k_max, function(k){kmeans(boston, k)$tot.withinss})

qplot(x = 1:k_max, y = twcss, geom = 'line')

Based on the figure, the WCSS drops sharply when moving from one cluster to two, so two clusters seems like a good choice. Let us perform k-means clustering with two clusters and plot the results.
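
The quantity being minimized, tot.withinss, is just the sum of squared distances from each point to its cluster centroid, which can be verified by hand on toy data (data and seed are my own):

```r
set.seed(123)  # arbitrary seed so the toy clustering is reproducible
# 20 two-dimensional points with well-separated coordinates
x <- matrix(c(rnorm(20, 0), rnorm(20, 5)), ncol = 2)
km <- kmeans(x, centers = 2)

# Recompute WCSS manually: squared distance of each point to its own centroid
wcss <- sum((x - km$centers[km$cluster, ])^2)
all.equal(wcss, km$tot.withinss)  # TRUE
```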

library(dplyr); library(MASS)
# Again, cluster the standardized data
kM <- kmeans(boston, centers = 2)
pairs(boston[1:5], col = kM$cluster )

pairs(boston[6:10], col = kM$cluster )

pairs(boston[11:14], col = kM$cluster )